58 research outputs found

    Sensory Properties in Fusion of Visual/Haptic Stimuli Using Mixed Reality

    When we recognize objects, information from multiple senses (e.g., vision, hearing, and touch) is fused. For example, both the eyes and the hands provide relevant information about an object's shape. We investigate how such sensory stimuli interact with each other. For that purpose, we developed a system that provides haptic/visual sensory fusion using a mixed reality technique. Our experiments show that the haptic stimulus appears to be affected by the visual stimulus when a discrepancy exists between the visual and haptic stimuli.

    DETECTING WALKABLE PLANE AREAS BY USING RGB-D CAMERA AND ACCELEROMETER FOR VISUALLY IMPAIRED PEOPLE

    When visually impaired people walk outdoors, they rely on white canes, but the range a white cane can scan is not long enough to walk safely. We propose detecting walkable plane areas on the road surface using an RGB-D camera and the accelerometer of a tablet terminal attached to the camera. Our approach detects plane areas at longer distances than a white cane. This is achieved by using the height from the ground and the surface normal vectors, both computed in real time from the depth image obtained by the RGB-D camera.
    Published in: 2017 3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON). Date of Conference: 7-9 June 2017. Conference Location: Copenhagen, Denmark
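    The core idea described above, labeling pixels walkable when their surface normal is near-vertical and their height is close to the ground plane, can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation; the function name, parameters, and tolerances are assumptions, and the gravity direction would come from the accelerometer in the actual system.

```python
import numpy as np

def walkable_mask(depth, fx, fy, cx, cy, gravity, ground_h=0.0,
                  h_tol=0.05, angle_tol_deg=15.0):
    """Label pixels as walkable when their surface normal is close to the
    gravity direction and their height is near the ground plane.
    (Illustrative sketch; parameter names and tolerances are assumptions.)"""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    # Back-project each depth pixel to a camera-space 3D point.
    X = (u - cx) * depth / fx
    Y = (v - cy) * depth / fy
    P = np.dstack([X, Y, depth])
    # Estimate normals from cross products of neighboring point differences.
    dx = np.roll(P, -1, axis=1) - P
    dy = np.roll(P, -1, axis=0) - P
    N = np.cross(dx, dy)
    N /= np.linalg.norm(N, axis=2, keepdims=True) + 1e-9
    g = gravity / np.linalg.norm(gravity)
    # Height of each point measured along the gravity axis.
    height = P @ g
    vertical = np.abs(N @ g) > np.cos(np.radians(angle_tol_deg))
    level = np.abs(height - ground_h) < h_tol
    return vertical & level
```

    A real implementation would additionally smooth the depth map and group connected walkable pixels into plane regions.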

    Calibration of Multiple Sparsely Distributed Cameras Using a Mobile Camera

    In sports science research, there are many topics that utilize the body motion of athletes extracted by motion capture systems, since motion information is valuable data for improving an athlete’s skills. However, one of the unsolved challenges in motion capture is the extraction of athletes’ motion information during an actual game or match, as placing markers on athletes is impractical during game play. In this research, the authors propose a method for acquiring motion information without attaching markers, utilizing computer vision technology. In the proposed method, the three-dimensional world joint positions of the athlete’s body can be acquired using just two cameras without any visual markers. Furthermore, the athlete’s three-dimensional joint positions during game play can also be obtained without complicated preparations. Camera calibration, which estimates the projective relationship between three-dimensional world and two-dimensional image spaces, is one of the principal processes for three-dimensional image processing tasks such as three-dimensional reconstruction and three-dimensional tracking. A strong-calibration method, which requires landmarks with known three-dimensional positions, is a common technique. However, as the target space expands, landmark placement becomes increasingly complicated. Although a weak-calibration method does not need known landmarks, its estimation precision depends on the accuracy of the correspondences between captured images. When multiple cameras are arranged sparsely, sufficient detection of corresponding points is difficult. In this research, the authors propose a calibration method that bridges multiple sparsely distributed cameras using mobile camera images. Appropriate spacing between the bridging images was confirmed through comparative experiments evaluating camera calibration accuracy while changing the number of bridging images.
    Furthermore, the proposed method was applied to multiple capturing experiments in a large-scale space to verify its robustness. As a relevant example, the proposed method was applied to the three-dimensional skeleton estimation of badminton players, and a quantitative evaluation was conducted on the camera calibration used for the three-dimensional skeleton. The reprojection error of each part of the skeletons and its standard deviation were approximately 2.72 and 0.81 mm, respectively, confirming that the proposed method is highly accurate when applied to camera calibration. Finally, the proposed calibration method was quantitatively compared with a calibration method using the coordinates of eight manually specified points. In conclusion, the proposed method stabilizes calibration accuracy in the vertical direction of the world coordinate system.
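    The reprojection error quoted above is the standard quality metric for camera calibration: known 3D points are projected through the estimated camera model and compared against their observed image positions. A minimal sketch of that computation, assuming a pinhole camera model (the function name and argument layout are illustrative, not the authors' code):

```python
import numpy as np

def reprojection_error(K, R, t, points_3d, points_2d):
    """Mean reprojection error for a pinhole camera (illustrative sketch).
    K: 3x3 intrinsic matrix, R: 3x3 rotation, t: translation 3-vector,
    points_3d: Nx3 world points, points_2d: Nx2 observed image points."""
    # Transform world points into the camera frame, then project.
    cam = points_3d @ R.T + t
    proj = cam @ K.T
    proj = proj[:, :2] / proj[:, 2:3]  # perspective division
    # Mean Euclidean distance between projected and observed points.
    return np.mean(np.linalg.norm(proj - points_2d, axis=1))
```

    Calibration pipelines minimize exactly this quantity over the camera parameters; a low residual such as the ~2.72 mm reported above indicates a consistent estimate.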

    Smoothly Switching Method of Asynchronous Multi-View Videos Using Frame Interpolation

    This paper proposes a method that generates smooth viewpoint switching by reducing the flickering artifact observed in bullet-time video generated from asynchronous multi-view videos, using frame interpolation processing. When we asynchronously capture multi-view videos of an object moving at high velocity, deviations occur in the object's observed position at bullet-time. We apply a frame interpolation technique to reduce this problem. By selecting the interpolated images that produce the smallest movement of the subject's observed position, we smoothly generate viewpoint-switched bullet-time video.
    Published in: 2017 3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON). Date of Conference: 7-9 June 2017. Conference Location: Copenhagen, Denmark
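    The selection step described above, picking the interpolated candidate frame that minimizes the subject's apparent jump at the viewpoint switch, amounts to a nearest-position search. A minimal sketch (the function and its inputs are assumptions for illustration; the paper's actual selection criterion may weigh additional factors):

```python
import numpy as np

def pick_smooth_frame(prev_pos, candidate_positions):
    """Return the index of the interpolated frame whose subject position
    is closest to the subject's position in the previous view, minimizing
    the visible jump at the switch. (Sketch; positions are assumed to be
    2D image coordinates of the tracked subject.)"""
    prev = np.asarray(prev_pos, dtype=float)
    cands = np.asarray(candidate_positions, dtype=float)
    jumps = np.linalg.norm(cands - prev, axis=1)
    return int(np.argmin(jumps))
```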

    Pseudo-Dolly-In Video Generation Combining 3D Modeling and Image Reconstruction

    This paper proposes a pseudo-dolly-in video generation method that reproduces motion parallax by applying image reconstruction processing to multi-view videos. Because dolly-in video is captured by moving a camera forward, it reproduces motion parallax and conveys a sense of immersion. However, at a sporting event in a large-scale space, moving a camera is difficult. Our research generates dolly-in video from multi-view images captured by fixed cameras. Dolly-in video can be generated by applying an Image-Based Modeling technique, but the video quality is often degraded by 3D estimation errors. On the other hand, bullet-time enables high-quality video observation, but moving the virtual viewpoint away from the capturing positions is difficult. To solve these problems, we propose a method that generates a pseudo-dolly-in image by incorporating 3D estimation and image reconstruction techniques into bullet-time, and we show its effectiveness by applying it to multi-view videos captured at an actual soccer stadium. In the experiment, we compared the proposed method with digital zoom images and with dolly-in video generated by an Image-Based Modeling and Rendering method.
    Published in: 2017 IEEE International Symposium on Mixed and Augmented Reality (ISMAR-Adjunct). Date of Conference: 9-13 Oct. 2017. Conference Location: Nantes, France

    Method to Generate Disaster-Damage Map using 3D photometry and Crowd Sourcing

    Thanks to the rapid progress of the Internet and mobile devices, information related to disaster areas can be collected through the Internet. To grasp the degree of damage in a disaster situation, the use of crowdsourcing, coordinating the individual efforts (micro tasks) of an enormous number of users (workers) on the Internet, has been drawing attention as a means of quickly solving problems. However, the information gathered from the Internet is vast and diverse, so it is difficult to formulate as a crowdsourcing task. This paper proposes a platform that converts images of a disaster site photographed by various users into information about the site, integrating the images into a single map using 3D image processing and providing the map to crowdsourcing workers as micro tasks.
    Published in: 2017 IEEE International Conference on Big Data (Big Data). Date of Conference: 11-14 Dec. 2017. Conference Location: Boston, MA, USA

    Visual Tracking Method of a Quick and Anomalously Moving Badminton Shuttlecock

    This paper introduces a method that uses multiple-view videos to estimate the 3D position of a badminton shuttlecock that moves quickly and anomalously. When an object moves quickly, it is observed with a motion blur effect. By utilizing the information provided by the shape of the motion blur region, we propose a visual tracking method for objects with an erratic and drastically changing speed. For cases where the speed increases tremendously, we propose another method, which applies the shape-from-silhouette technique, to estimate the 3D position of a moving shuttlecock using unsynchronized multiple-view videos. We confirmed the effectiveness of our proposed technique using video sequences and a CG simulation image set.
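    The shape-from-silhouette technique mentioned above carves a visual hull: a candidate 3D point survives only if its projection falls inside the silhouette mask in every view. A minimal voxel-based sketch of that test (illustrative only; the function name and data layout are assumptions, and the paper applies the idea to unsynchronized views, which adds temporal alignment not shown here):

```python
import numpy as np

def visual_hull(voxels, cameras, silhouettes):
    """Shape-from-silhouette sketch: keep voxels whose projection lands
    inside every view's silhouette. `voxels` is Nx3, `cameras` is a list
    of 3x4 projection matrices, `silhouettes` a list of boolean HxW masks."""
    keep = np.ones(len(voxels), dtype=bool)
    homog = np.hstack([voxels, np.ones((len(voxels), 1))])
    for P, mask in zip(cameras, silhouettes):
        p = homog @ P.T
        u = np.round(p[:, 0] / p[:, 2]).astype(int)
        v = np.round(p[:, 1] / p[:, 2]).astype(int)
        h, w = mask.shape
        inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        # A voxel survives a view only if it projects onto the silhouette.
        hit = np.zeros(len(voxels), dtype=bool)
        hit[inside] = mask[v[inside], u[inside]]
        keep &= hit
    return voxels[keep]
```

    The shuttlecock's 3D position can then be taken as, e.g., the centroid of the surviving voxels.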

    Mutual superimposing of SAR and ground-level shooting images mediated by intermediate multi-altitude images

    When satellite-based SAR (Synthetic Aperture Radar) images and images acquired from the ground are registered, they offer a wealth of information, such as topography, vegetation, and water surfaces, to be extracted from the ground-level images. Simultaneously, the high-temporal-resolution and high-spatial-resolution information in the ground-level images can be superimposed on the satellite images. However, due to differences in imaging modality, spatial resolution, and observation angle, it is not easy to directly extract corresponding points between them. This paper proposes an image registration method that estimates the correspondence between SAR images and ground-level images through a set of intermediate multi-altitude images taken at different heights.